With the drive to create a decentralized digital economy, Web 3.0 has become a cornerstone of digital transformation, built on computing-force networking, distributed data storage, and blockchain. As quantum devices rapidly become practical, Web 3.0 is being developed in parallel with the deployment of quantum cloud computing and the quantum Internet. In this regard, quantum computing breaks the classical cryptographic systems that currently protect data, while the advantages of quantum computation and communication offer the means to reshape modern cryptography. Therefore, in this paper, we introduce a quantum blockchain-driven Web 3.0 framework that provides information-theoretic security for decentralized data transfer and payment transactions. First, we present the framework of quantum blockchain-driven Web 3.0 with future-proof security for the transmission of data and transaction information. Next, we discuss the potential applications and challenges of implementing quantum blockchain in Web 3.0. Finally, we describe a use case for quantum non-fungible tokens (NFTs) and propose a quantum deep learning-based optimal auction for NFT trading that maximizes achievable revenue and thus sustains liquidity in Web 3.0. In this way, the proposed framework can achieve provable security and sustainability for the next-generation decentralized digital society.
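The revenue-maximizing auction is the most concrete piece of the proposal. As a point of reference only, below is a minimal sketch of a classical single-item second-price auction with a reserve price; the paper's quantum deep learning-based auction is not reproduced here, and the bids and reserve value are illustrative assumptions.

```python
# Sketch: classical second-price (Vickrey) auction with a reserve price, shown
# only as a baseline for the revenue-maximization goal described above.
# The reserve value and bids below are illustrative, not from the paper.

def second_price_auction(bids, reserve=0.0):
    """Return (winner_index, payment); the winner pays max(second bid, reserve)."""
    eligible = [(b, i) for i, b in enumerate(bids) if b >= reserve]
    if not eligible:
        return None, 0.0  # the NFT stays unsold
    eligible.sort(reverse=True)
    top_bid, winner = eligible[0]
    runner_up = eligible[1][0] if len(eligible) > 1 else reserve
    return winner, max(runner_up, reserve)

winner, revenue = second_price_auction([3.2, 5.1, 4.7], reserve=4.0)
print(f"winner={winner}, revenue={revenue}")  # winner=1, revenue=4.7
```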
Various depth estimation models are now widely used on mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking, and many other mobile tasks. It is therefore crucial to have efficient and accurate depth estimation models that run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that achieve real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset collected with a ZED stereo camera capable of generating depth maps for objects located up to 50 meters away. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; a detailed description of each is provided in this paper.
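Since the challenge scores models by on-device runtime, it is worth showing how such a measurement is typically taken. Below is a minimal sketch of timing a TensorFlow Lite depth model; the file name `depth_model.tflite` is a placeholder, not an artifact from the challenge.

```python
# Sketch: measuring per-frame latency (and thus FPS) of a TFLite depth model,
# roughly how challenge-style runtime numbers are obtained on-device.
# "depth_model.tflite" is a hypothetical placeholder file.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="depth_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.random.rand(*inp["shape"]).astype(np.float32)  # stand-in RGB frame
runs = 50
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    depth = interpreter.get_tensor(out["index"])
elapsed = (time.perf_counter() - start) / runs
print(f"avg latency: {elapsed * 1e3:.1f} ms  (~{1.0 / elapsed:.1f} FPS)")
```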
Supported by advances in computing and communication technologies, the Metaverse is expected to bring users unprecedented service experiences. However, the growing number of Metaverse users places a heavy demand on network resources, especially for Metaverse services that are based on graphical extended reality and require rendering large numbers of virtual objects. To use network resources efficiently and improve quality of experience (QoE), we design an attention-aware network resource allocation scheme for customized Metaverse services. The goal is to allocate more network resources to the virtual objects in which users are more interested. We first discuss several key technologies related to Metaverse services, including QoE analysis, eye tracking, and remote rendering. We then review existing datasets and propose a User-Object-Attention Level (UOAL) dataset that contains the ground-truth attention of 30 users to 96 objects across 1,000 images, together with a tutorial on how to use UOAL. With the help of UOAL, we propose an attention-aware network resource allocation algorithm with two steps: attention prediction and QoE maximization. In particular, we outline the design of two types of attention prediction methods, namely interest-aware and time-aware prediction. Using the predicted user-object attention values, network resources such as the rendering capacity of edge devices can be allocated optimally to maximize QoE. Finally, we present promising research directions related to Metaverse services.
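To make the QoE-maximization step concrete, here is a minimal sketch under a simplifying assumption not taken from the paper: if per-object QoE is modeled as a log utility weighted by predicted attention, the optimal split of a rendering-capacity budget is simply proportional to attention.

```python
# Sketch: attention-weighted allocation of a rendering-capacity budget.
# Assumes (our simplification, not the paper's model) a log-utility QoE,
# QoE = sum_o a_o * log(r_o), whose maximizer under a budget constraint
# allocates capacity proportionally to attention.
import numpy as np

def allocate_rendering(attention, budget):
    """attention: predicted user-object attention levels; budget: total capacity."""
    a = np.asarray(attention, dtype=float)
    return budget * a / a.sum()  # closed-form maximizer of sum a_o * log(r_o)

attention = [0.7, 0.2, 0.1]      # e.g., predicted attention for three virtual objects
print(allocate_rendering(attention, budget=120.0))  # [84. 24. 12.]
```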
In this paper, we focus on learning effective entity matching models over multi-source large-scale data. For real applications, we relax the typical assumptions that data distributions/spaces or entity identities are shared between sources, and propose a Relaxed Multi-source Large-scale Entity-matching (RMLE) problem. The challenges of the problem include 1) how to align large-scale entities between sources to share information and 2) how to mitigate negative transfer from jointly learning over multi-source data. Worse still, a practical issue is the entanglement between the two challenges: incorrect alignment may increase negative transfer, while mitigating negative transfer for one source may yield poorly learned representations for other sources and in turn degrade alignment accuracy. To handle these entangled challenges, we point out that the key is to first optimize information sharing based on Pareto front optimization, since the amount of information shared significantly affects the Pareto front, which depicts the lower bound of negative transfer. Consequently, we propose an Incentive Compatible Pareto Alignment (ICPA) method, which first optimizes cross-source alignment based on Pareto front optimization and then mitigates negative transfer constrained on the optimized alignment. This mechanism lets each source learn based on its true preference without worrying about degrading the representations of other sources. Specifically, the Pareto front optimization encourages minimizing the lower bound of negative transfer, which optimizes whether or not to align. Comprehensive empirical evaluation results on four large-scale datasets are provided to demonstrate the effectiveness and superiority of ICPA. Online A/B test results on a search advertising platform also demonstrate the effectiveness of ICPA in production environments.
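The Pareto front optimization invoked here can be illustrated with the standard two-objective min-norm construction from multi-objective gradient descent (the MGDA two-task closed form). This is an assumption about the general machinery, not necessarily ICPA's exact procedure.

```python
# Sketch: a Pareto descent direction for two per-source losses via the
# closed-form min-norm convex combination of their gradients (the classic
# two-task MGDA solution). Illustrative machinery only, not ICPA itself.
import numpy as np

def pareto_direction(g1, g2, eps=1e-12):
    """Min-norm convex combination of two gradients; descends both losses."""
    diff = g1 - g2
    denom = float(diff @ diff) + eps
    alpha = float(np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0))
    return alpha * g1 + (1.0 - alpha) * g2

g1 = np.array([1.0, 0.0])   # gradient of source-1 loss (toy example)
g2 = np.array([0.0, 1.0])   # gradient of source-2 loss
print(pareto_direction(g1, g2))  # [0.5 0.5]: neither source is sacrificed
```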
Modern software systems and products increasingly rely on machine learning models to make data-driven decisions based on interactions with users and systems, e.g., compute infrastructure. For broader adoption, this practice must (i) accommodate software engineers without ML backgrounds and (ii) provide mechanisms to optimize for product goals. In this work, we describe general principles and a specific end-to-end ML platform, Looper, which offers easy-to-use APIs for decision making and feedback collection. Looper supports the full end-to-end ML lifecycle from online data collection to model training, deployment, and inference, with extended support for evaluation and tuning against product goals. We outline the platform architecture and the overall impact of production deployment: Looper currently hosts 700 ML models and makes 6 million decisions per second. We also describe the learning curve and summarize the experiences of platform adopters.
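The abstract describes the decide-then-log-feedback pattern without showing its API. The sketch below illustrates only that generic pattern; every identifier (`DecisionClient`, `decide`, `log_feedback`) is a hypothetical placeholder and not Looper's real interface.

```python
# Sketch of the decide/feedback loop such platforms expose to engineers
# without an ML background. All names here are hypothetical stand-ins,
# not the actual Looper API.
import random

class DecisionClient:
    """Toy stand-in: an epsilon-greedy policy behind a decision API."""
    def __init__(self, arms, epsilon=0.1):
        self.arms, self.epsilon = arms, epsilon
        self.stats = {a: [0, 0.0] for a in arms}  # arm -> [count, total reward]

    def decide(self, context=None):
        if random.random() < self.epsilon:
            return random.choice(self.arms)      # explore
        return max(self.arms,                    # exploit best average reward
                   key=lambda a: self.stats[a][1] / max(self.stats[a][0], 1))

    def log_feedback(self, arm, reward):
        self.stats[arm][0] += 1
        self.stats[arm][1] += reward

client = DecisionClient(arms=["layout_a", "layout_b"])
choice = client.decide(context={"user_segment": "new"})
client.log_feedback(choice, reward=1.0)  # e.g., the user engaged
```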
Modeling the noise transition matrix is a promising approach for learning with label noise. Based on the estimated noise transition matrix and the noisy posterior probabilities, the clean posterior probabilities, which are jointly called the Label Distribution (LD) in this paper, can be calculated as the supervision. To reliably estimate the noise transition matrix, some methods assume that anchor points are available during training. Nonetheless, if the anchor points are invalid, the noise transition matrix may be poorly learned, resulting in poor performance. Consequently, other methods treat reliable data points extracted from the training data as pseudo anchor points. However, from a statistical point of view, the noise transition matrix can be inferred from data with noisy labels under the clean-label-domination assumption. Therefore, we aim to estimate the noise transition matrix without (pseudo) anchor points. There is evidence that samples are more likely to be mislabeled as similar class labels, which means the mislabeling probability is highly correlated with the inter-class correlation. Inspired by this observation, we propose an instance-specific Label Distribution Regularization (LDR), in which the instance-specific LD is estimated as the supervision, to prevent DCNNs from memorizing noisy labels. Specifically, we estimate the noisy posterior under the supervision of noisy labels, and approximate the batch-level noise transition matrix by estimating the inter-class correlation matrix with neither anchor points nor pseudo anchor points. Experimental results on two synthetic noisy datasets and two real-world noisy datasets demonstrate that our LDR outperforms existing methods.
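A minimal sketch of the anchor-free, batch-level step is given below, under simplifying assumptions: the inter-class correlation matrix is approximated by averaging the model's predictions grouped by noisy label, row-normalized as a stand-in transition matrix, and the induced label distribution is used as a soft target. This is a simplification of the paper's full LDR objective, not its exact loss.

```python
# Sketch: anchor-free batch-level estimate of the noise transition matrix from
# predictions grouped by noisy label, then a label-distribution target.
# Simplified relative to the paper's full instance-specific LDR.
import torch
import torch.nn.functional as F

def ldr_loss(logits, noisy_labels, num_classes):
    probs = F.softmax(logits, dim=1)                    # noisy posterior estimate
    # Inter-class correlation: mean prediction over samples with noisy label i.
    T = torch.zeros(num_classes, num_classes, device=logits.device)
    for i in range(num_classes):
        mask = noisy_labels == i
        if mask.any():
            T[i] = probs[mask].mean(dim=0)
    T = T / T.sum(dim=1, keepdim=True).clamp_min(1e-8)  # row-stochastic
    ld = T[noisy_labels]                                # one soft target per sample
    # Pull predictions toward the estimated label distribution.
    return F.kl_div(probs.clamp_min(1e-8).log(), ld, reduction="batchmean")
```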
Recent advanced research on connected vehicles has targeted the integration of vehicle-to-everything (V2X) networks with machine learning (ML) tools and distributed decision making. Federated learning (FL) is emerging as a new paradigm for training ML models, including in vehicles within V2X networks. Rather than sharing and uploading training data to a server, model parameter updates (e.g., the weights and biases of neural networks) are applied by a large population of interconnected vehicles acting as local learners. Despite these benefits, a limitation of existing approaches is centralized optimization, which relies on a server to aggregate and fuse local parameters, leading to the drawbacks of a single point of failure and scaling issues as the V2X network size grows. Meanwhile, in intelligent transportation scenarios, the data collected by onboard sensors is redundant, which degrades the performance of aggregation. To address these problems, we explore the novel idea of decentralized data processing and introduce a federated learning framework for in-network vehicles, C-DFL (Consensus-based Decentralized Federated Learning), to tackle federated learning on connected vehicles and improve learning quality. Extensive simulations have been implemented to evaluate the performance of C-DFL, showing that C-DFL outperforms conventional methods in all scenarios.
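The consensus idea can be sketched in a few lines: each vehicle takes a local gradient step, then mixes its parameters with those of its neighbors. The simple row-stochastic mixing matrix below is an illustrative assumption (doubly stochastic weights such as Metropolis weights are the typical choice in the consensus literature), not necessarily the paper's exact scheme.

```python
# Sketch: one C-DFL-style round, local SGD followed by a consensus step in
# which each vehicle averages parameters with its V2X neighbors.
import numpy as np

def consensus_round(params, adjacency, local_grads, lr=0.1):
    """params: (n_vehicles, dim); adjacency: symmetric 0/1 matrix with self-loops."""
    # 1) Local update on each vehicle's own sensor data.
    params = params - lr * local_grads
    # 2) Consensus: row-normalized neighbor averaging (simple mixing matrix).
    W = adjacency / adjacency.sum(axis=1, keepdims=True)
    return W @ params

params = np.random.randn(4, 3)                 # 4 vehicles, 3 parameters each
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)      # line-like V2X topology
grads = np.zeros_like(params)                  # placeholder local gradients
params = consensus_round(params, A, grads)     # parameters drift toward agreement
```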
Monocular depth estimation is an important task in the computer vision community. Although many successful methods have achieved excellent results, most of them are computationally expensive and not applicable to real-time inference. In this paper, we aim at a more practical application of monocular depth estimation, where the solution should consider not only accuracy but also inference time on mobile devices. To this end, we first develop an end-to-end learning-based model with a tiny weight size (1.4 MB) and a short inference time (27 FPS on a Raspberry Pi 4). Then, we propose a simple yet effective data augmentation strategy, called R2 crop, to boost model performance. Furthermore, we observe that a simple lightweight model trained with only a single loss term suffers from a performance bottleneck. To alleviate this issue, we adopt multiple loss terms that provide sufficient constraints during the training stage. Moreover, with a simple dynamic re-weighting strategy, we can avoid time-consuming hyper-parameter selection for the loss terms. Finally, we adopt structure-aware distillation to further improve model performance. Notably, our solution ranked 2nd in the MAI&AIM 2022 Monocular Depth Estimation Challenge, with an si-RMSE of 0.311, an RMSE of 3.79, and an inference time of 37 ms tested on a Raspberry Pi 4, and it is the fastest solution in the challenge. Code and models will be released at \url{https://github.com/zhyever/litedepth}.
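The dynamic re-weighting is only named in the abstract. One common recipe is to scale each loss term by the inverse of its detached running magnitude so all terms contribute at a comparable scale; the sketch below shows that recipe, which is an assumption rather than necessarily the exact strategy used here.

```python
# Sketch: combining several depth losses with magnitude-based dynamic weights,
# avoiding a per-term hyper-parameter search. One common recipe, not
# necessarily this paper's exact rule.
import torch

class DynamicWeighting:
    def __init__(self, n_terms, momentum=0.9):
        self.avg = torch.ones(n_terms)  # running magnitude of each loss term
        self.momentum = momentum

    def combine(self, losses):
        losses = torch.stack(losses)
        self.avg = self.momentum * self.avg + (1 - self.momentum) * losses.detach()
        return (losses / self.avg.clamp_min(1e-8)).sum()  # each term ~ unit scale

weighting = DynamicWeighting(n_terms=3)
# e.g., an si-log term, a gradient term, and a distillation term for one batch:
total = weighting.combine([torch.tensor(0.8), torch.tensor(0.05), torch.tensor(2.1)])
print(total)  # in real training these would be differentiable loss tensors
```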
Automated abdominal multi-organ segmentation is a crucial yet challenging task for the computer-aided diagnosis of abdominal organ-related diseases. Although numerous deep learning models have achieved remarkable success in many medical image segmentation tasks, accurate segmentation of abdominal organs remains challenging due to the varying sizes of abdominal organs and the ambiguous boundaries between them. In this paper, we propose a boundary-aware network (BA-Net) to segment abdominal organs on CT and MRI scans. The model contains a shared encoder, a boundary decoder, and a segmentation decoder. Both decoders adopt a multi-scale deep supervision strategy, which can alleviate the issues caused by variable organ sizes. The boundary probability maps produced by the boundary decoder at each scale are used as attention to enhance the segmentation feature maps. We evaluated BA-Net on the Abdominal Multi-Organ Segmentation (AMOS) challenge dataset and achieved an average Dice score of 89.29% for multi-organ segmentation on CT scans and an average Dice score of 71.92% on MRI scans. The results demonstrate that BA-Net outperforms nnU-Net on both segmentation tasks.
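The boundary-as-attention coupling at the heart of BA-Net can be sketched as follows; channel sizes are illustrative, the sketch is 2D for brevity, and the multi-scale deep supervision is omitted.

```python
# Sketch: a boundary decoder's probability map used as attention over the
# segmentation features, the core BA-Net coupling described above.
# Illustrative sizes; multi-scale deep supervision omitted.
import torch
import torch.nn as nn

class BoundaryAttention(nn.Module):
    def __init__(self, channels, num_classes):
        super().__init__()
        self.boundary_head = nn.Conv2d(channels, 1, kernel_size=1)
        self.seg_head = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, feats):
        boundary = torch.sigmoid(self.boundary_head(feats))  # boundary prob map
        attended = feats * (1.0 + boundary)   # emphasize features near boundaries
        return self.seg_head(attended), boundary  # both outputs are supervised

feats = torch.randn(2, 64, 96, 96)            # features from the shared encoder
seg_logits, boundary = BoundaryAttention(64, num_classes=16)(feats)
```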
Kidney structure segmentation is a crucial yet challenging task for the computer-aided diagnosis of surgery-based renal cancer. Although numerous deep learning models have achieved remarkable success in many medical image segmentation tasks, accurate segmentation of kidney structures on computed tomography angiography (CTA) images remains challenging due to the variable sizes of kidney tumors and the ambiguous boundaries between kidney tumors and their surroundings. In this paper, we propose a boundary-aware network (BA-Net) to segment kidneys, kidney tumors, arteries, and veins on CTA scans. The model contains a shared encoder, a boundary decoder, and a segmentation decoder. Both decoders adopt a multi-scale deep supervision strategy, which can alleviate the issue caused by variable tumor sizes. The boundary probability maps produced by the boundary decoder at each scale are used as attention to enhance the segmentation feature maps. We evaluated BA-Net on the Kidney PArsing (KiPA) challenge dataset and achieved an average Dice score of 89.65% for kidney structure segmentation on CTA scans using 4-fold cross-validation. The results demonstrate the effectiveness of BA-Net.